June 26, 2019

Chairwoman Johnson’s Opening Statement for AI Ethics Hearing

(Washington, DC) – Today, the House Committee on Science, Space, and Technology is holding a full Committee hearing titled, “Artificial Intelligence: Societal and Ethical Implications.”

Chairwoman Eddie Bernice Johnson’s (D-TX) opening statement for the record is below.

Good morning, and welcome to our distinguished panel of witnesses.

We are here today to learn about the societal impacts and ethical implications of a technology that is rapidly changing our lives: artificial intelligence.

From friendly robot companions to hostile terminators, AI has appeared in films and sparked our imagination for many decades. Today, AI is no longer a futuristic idea, at least not AI designed for specific tasks. Recent advances in computing power and increases in data production and collection have enabled AI-driven technology to be used in a growing number of sectors and applications, including in ways we may not realize. AI is routinely used to personalize advertisements when we browse the internet. It is also being used to determine who gets hired for a job or what kinds of student essays deserve a higher score.

AI systems can be a powerful tool for good, but they also carry risks. AI systems have been shown to exhibit gender discrimination when displaying job ads, racial discrimination in predictive policing, and socioeconomic discrimination in selecting which zip codes are offered commercial products and services.

AI systems themselves do not have an agenda, but the humans behind the algorithms can unwittingly introduce their personal biases and perspectives into the design and use of AI. Those algorithms are then trained on data that is biased in ways both known and unknown. In addition to producing discriminatory decisions, biases in the design and training of algorithms can cause AI to fail in other ways, for example, by performing worse than clinicians in medical diagnostics.

We know that these risks exist. What we do not fully understand is how to mitigate them. We are also struggling with how to protect society against intentional misuse and abuse of AI. There has been a proliferation of general AI ethics principles by companies and nations alike. The United States recently endorsed an international set of principles for the responsible development of AI. However, the hard work lies in translating these principles into concrete, effective action. Ethics must be integrated at the earliest stages of AI research and education, and it must continue to be prioritized at every stage of design and deployment.

Federal agencies have been investing in AI technology for years. The White House recently issued an executive order on Maintaining American Leadership in AI and updated the 2016 National Artificial Intelligence R&D Strategic Plan. These are important steps. However, I also have concerns. First, to actually achieve leadership, we need to be willing to invest. Second, while a few individual agencies are making ethics a priority, the Administration's executive order and strategic plan fall short in that regard. When they mention ethics at all, they treat it as an add-on rather than an integral component of all AI R&D.

From improving healthcare, transportation, and education, to helping solve poverty and improve climate resilience, AI has vast potential to advance the public good. However, this is a technology that will transcend national boundaries, and if the U.S. does not address AI ethics seriously and thoughtfully, we will lose the opportunity to become a leader in setting the international norms and standards for AI in the coming decades. Leadership is not just about advancing the technology; it is about advancing it responsibly.

I look forward to hearing the insights and recommendations from today’s expert panel on how the United States can lead in the ethical development of AI.